
    Classifying Cue Phrases in Text and Speech Using Machine Learning

    Cue phrases may be used in a discourse sense to explicitly signal discourse structure, but also in a sentential sense to convey semantic rather than structural information. This paper explores the use of machine learning for classifying cue phrases as discourse or sentential. Two machine learning programs (Cgrendel and C4.5) are used to induce classification rules from sets of pre-classified cue phrases and their features. Machine learning is shown to be an effective technique not only for automating the generation of classification rules, but also for improving upon previous results.
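    As a concrete illustration of the rule-induction step described above (not the paper's actual setup), the sketch below induces a decision tree from a handful of hypothetical pre-classified cue phrases, using scikit-learn's DecisionTreeClassifier as a stand-in for C4.5; the feature names and example values are invented for illustration.

```python
# Illustrative sketch only: induce a decision-tree classifier from
# pre-classified cue phrases, with scikit-learn standing in for C4.5.
# Feature names and values below are hypothetical.
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical training examples: each cue phrase is described by a few
# features and labeled as "discourse" or "sentential".
examples = [
    {"token": "now", "preceded_by_pause": True, "position": "initial"},
    {"token": "now", "preceded_by_pause": False, "position": "medial"},
    {"token": "well", "preceded_by_pause": True, "position": "initial"},
    {"token": "like", "preceded_by_pause": False, "position": "medial"},
]
labels = ["discourse", "sentential", "discourse", "sentential"]

vectorizer = DictVectorizer(sparse=False)
X = vectorizer.fit_transform(examples)

# Entropy-based splitting is the closest analogue to C4.5's information gain.
tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
tree.fit(X, labels)

# Inspect the induced rules, analogous to examining a learned rule set.
print(export_text(tree, feature_names=vectorizer.get_feature_names_out().tolist()))
```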

    Cue Phrase Classification Using Machine Learning

    Cue phrases may be used in a discourse sense to explicitly signal discourse structure, but also in a sentential sense to convey semantic rather than structural information. Correctly classifying cue phrases as discourse or sentential is critical in natural language processing systems that exploit discourse structure, e.g., for performing tasks such as anaphora resolution and plan recognition. This paper explores the use of machine learning for classifying cue phrases as discourse or sentential. Two machine learning programs (Cgrendel and C4.5) are used to induce classification models from sets of pre-classified cue phrases and their features in text and speech. Machine learning is shown to be an effective technique not only for automating the generation of classification models, but also for improving upon previous results. When compared to manually derived classification models already in the literature, the learned models often perform with higher accuracy and contain new linguistic insights into the data. In addition, the ability to automatically construct classification models makes it easier to comparatively analyze the utility of alternative feature representations of the data. Finally, the ease of retraining makes the learning approach more scalable and flexible than manual methods.
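    The abstract notes that automatic model construction makes it easier to compare alternative feature representations. The sketch below illustrates one way such a comparison could be run, assuming hypothetical prosodic and textual features and cross-validated accuracy; it is not the paper's experimental setup.

```python
# Sketch (not the paper's actual setup): compare alternative feature
# representations of the same pre-classified cue phrases by cross-validated
# accuracy. Feature sets and data are hypothetical placeholders.
from sklearn.feature_extraction import DictVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

def evaluate(feature_dicts, labels, name):
    """Cross-validate a decision tree on one feature representation."""
    model = make_pipeline(DictVectorizer(), DecisionTreeClassifier(criterion="entropy"))
    scores = cross_val_score(model, feature_dicts, labels, cv=2)
    print(f"{name}: mean accuracy {scores.mean():.2f}")

labels = ["discourse", "sentential", "discourse", "sentential"]

# Representation A: prosodic features only (hypothetical).
prosodic = [{"preceded_by_pause": p} for p in (True, False, True, False)]
# Representation B: prosodic plus textual position (hypothetical).
positions = ("initial", "medial", "initial", "medial")
combined = [dict(d, position=pos) for d, pos in zip(prosodic, positions)]

evaluate(prosodic, labels, "prosodic only")
evaluate(combined, labels, "prosodic + textual")
```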

    Plan recognition for space telerobotics

    Current research on space telerobots has largely focused on two problem areas: executing remotely controlled actions (the tele part of telerobotics) or planning to execute them (the robot part). This work has largely ignored one of the key aspects of telerobots: the interaction between the machine and its operator. For this interaction to be felicitous, the machine must successfully understand what the operator is trying to accomplish with particular remote-controlled actions. Only with an understanding of the operator's purpose for performing these actions can the robot intelligently assist the operator, perhaps by warning of possible errors or taking over part of the task. There is a need for such an understanding in the telerobotics domain, and an intelligent interface being developed in the chemical process design domain addresses the same issues.

    When Does Disengagement Correlate with Performance in Spoken Dialog Computer Tutoring?

    In this paper we investigate how student disengagement relates to two performance metrics in a spoken dialog computer tutoring corpus, both when disengagement is measured through manual annotation by a trained human judge, and also when disengagement is measured through automatic annotation by the system based on a machine learning model. First, we investigate whether manually labeled overall disengagement and six different disengagement types are predictive of learning and user satisfaction in the corpus. Our results show that although students’ percentage of overall disengaged turns negatively correlates both with the amount they learn and their user satisfaction, the individual types of disengagement correlate differently: some negatively correlate with learning and user satisfaction, while others don’t correlate with either metric at all. Moreover, these relationships change somewhat depending on student prerequisite knowledge level. Furthermore, using multiple disengagement types to predict learning improves predictive power. Overall, these manual label-based results suggest that although adapting to disengagement should improve both student learning and user satisfaction in computer tutoring, maximizing performance requires the system to detect and respond differently based on disengagement type. Next, we present an approach to automatically detecting and responding to user disengagement types based on their differing correlations with correctness. Investigation of our machine learning model of user disengagement shows that its automatic labels negatively correlate with both performance metrics in the same way as the manual labels. The similarity of the correlations across the manual and automatic labels suggests that the automatic labels are a reasonable substitute for the manual labels. Moreover, the significant negative correlations themselves suggest that redesigning ITSPOKE to automatically detect and respond to disengagement has the potential to remediate disengagement and thereby improve performance, even in the presence of noise introduced by the automatic detection process.
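    A minimal sketch of the kind of correlation analysis described above, using invented per-student numbers: each student's fraction of disengaged turns is correlated with a learning-gain measure and with a satisfaction score.

```python
# Illustrative sketch, not the ITSPOKE analysis itself: correlate each
# student's percentage of disengaged turns with two performance metrics.
# All numbers below are hypothetical placeholders.
from scipy.stats import pearsonr

disengaged_pct = [0.05, 0.20, 0.35, 0.10, 0.50, 0.25]   # fraction of disengaged turns
learning_gain  = [0.60, 0.45, 0.30, 0.55, 0.20, 0.40]   # normalized learning gain
satisfaction   = [4.5, 4.0, 3.2, 4.4, 2.8, 3.6]         # survey score

for name, metric in [("learning gain", learning_gain), ("user satisfaction", satisfaction)]:
    r, p = pearsonr(disengaged_pct, metric)
    print(f"disengagement vs {name}: r={r:+.2f}, p={p:.3f}")
```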

    PARADISE: A Framework for Evaluating Spoken Dialogue Agents

    This paper presents PARADISE (PARAdigm for DIalogue System Evaluation), a general framework for evaluating spoken dialogue agents. The framework decouples task requirements from an agent's dialogue behaviors, supports comparisons among dialogue strategies, enables the calculation of performance over subdialogues and whole dialogues, specifies the relative contribution of various factors to performance, and makes it possible to compare agents performing different tasks by normalizing for task complexity.
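    The full PARADISE framework estimates the relative contribution of each factor by multiple linear regression over normalized task-success and dialogue-cost measures; the sketch below illustrates that idea under those assumptions, with invented per-dialogue numbers, and is not the paper's own implementation.

```python
# Rough illustration (assuming the regression-based formulation of PARADISE):
# estimate the relative contribution of a task-success measure and two
# dialogue cost measures to a performance target (e.g., user satisfaction)
# over z-score-normalized values. All data are invented.
import numpy as np
from scipy.stats import zscore

task_success = np.array([0.9, 0.7, 0.8, 0.4, 0.95, 0.6])  # e.g., kappa per dialogue
n_turns      = np.array([12, 20, 15, 30, 10, 25])          # efficiency cost
n_repairs    = np.array([1, 4, 2, 6, 0, 5])                # quality cost
user_sat     = np.array([4.6, 3.5, 4.1, 2.4, 4.8, 3.0])    # target

# Normalize each factor so the learned weights are comparable.
X = np.column_stack([zscore(task_success), zscore(n_turns), zscore(n_repairs)])
y = zscore(user_sat)

# Ordinary least squares: weights indicate each factor's relative contribution.
weights, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, w in zip(["task success", "turns", "repairs"], weights):
    print(f"{name}: weight {w:+.2f}")
```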

    Improving Peer Feedback Prediction: The Sentence Level is Right

    Recent research aims to automatically predict whether peer feedback is of high quality, e.g. suggests solutions to identified problems. While prior studies have focused on peer review of papers, similar issues arise when reviewing diagrams and other artifacts. In addition, previous studies have not carefully examined how the level of prediction granularity impacts both accuracy and educational utility. In this paper we develop models for predicting the quality of peer feedback regarding argument diagrams. We propose to perform prediction at the sentence level, even though the educational task is to label feedback at a multi-sentential comment level. We first introduce a corpus annotated at a sentence level granularity, then build comment prediction models using this corpus. Our results show that aggregating sentence prediction outputs to label comments not only outperforms approaches that directly train on comment annotations, but also provides useful information for enhancing peer review systems with new functionality.
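    A minimal sketch of the aggregation idea, assuming an "any positive sentence" rule that may differ from the paper's actual aggregation scheme: sentence-level predictions are combined into a single comment-level label.

```python
# Minimal sketch: combine sentence-level predictions into a comment-level
# label. The "any positive sentence" rule used here is one plausible scheme,
# not necessarily the paper's exact method.
from typing import List

def label_comment(sentence_predictions: List[bool]) -> bool:
    """Label a comment positive (e.g., 'suggests a solution') if any of its
    sentences is predicted positive."""
    return any(sentence_predictions)

# Hypothetical output of a sentence-level classifier for two comments.
comments = {
    "comment_1": [False, True, False],   # one sentence suggests a solution
    "comment_2": [False, False],         # no sentence does
}
for cid, preds in comments.items():
    print(cid, "->", label_comment(preds))
```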

    Impact of Annotation Difficulty on Automatically Detecting Problem Localization of Peer-Review Feedback

    We believe that providing assessment on students' reviewing performance will enable students to improve the quality of their peer reviews. We focus on assessing one particular aspect of the textual feedback contained in a peer review – the presence or absence of problem localization; feedback containing problem localization has been shown to be associated with increased understanding and implementation of the feedback. While in prior work we demonstrated the feasibility of learning to predict problem localization using linguistic features automatically extracted from textual feedback, we hypothesize that inter-annotator disagreement on labeling problem localization might impact both the accuracy and the content of the predictive models. To test this hypothesis, we compare the use of feedback examples where problem localization is labeled with differing levels of annotator agreement, for both training and testing our models. Our results show that when models are trained and tested using only feedback where annotators agree on problem localization, the models both perform with high accuracy and contain rules involving just two simple linguistic features. In contrast, when training and testing using feedback examples where annotators both agree and disagree, model performance drops slightly, but the learned rules capture more subtle patterns of problem localization.
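    A small sketch of the experimental manipulation described above, with invented examples and field names: training and test data are restricted to feedback on which both annotators agree about problem localization before the rule learner is run.

```python
# Sketch of the data-filtering step: keep only feedback examples where two
# annotators agree on the problem-localization label. Field names and
# examples are hypothetical.
examples = [
    {"text": "The second premise is unsupported.", "ann1": 1, "ann2": 1},
    {"text": "Nice diagram overall.",              "ann1": 0, "ann2": 0},
    {"text": "Something feels off here.",          "ann1": 1, "ann2": 0},
]

agreed = [ex for ex in examples if ex["ann1"] == ex["ann2"]]
labels = [ex["ann1"] for ex in agreed]

print(f"kept {len(agreed)} of {len(examples)} examples where annotators agree")
# 'agreed' and 'labels' would then feed the rule-learning step described above.
```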

    Exploring affect-context dependencies for adaptive system development

    We use χ2 to investigate the context dependency of student affect in our computer tutoring dialogues, targeting uncertainty in student answers in 3 automatically monitorable contexts. Our results show significant dependencies between uncertain answers and specific contexts. Identification and analysis of these dependencies is our first step in developing an adaptive version of our dialogue system.
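    A worked sketch of the χ2 dependency test described above, using scipy's chi2_contingency on an invented contingency table of uncertain vs. certain student answers inside and outside one context.

```python
# Worked sketch of a chi-square test of independence between answer
# uncertainty and context membership. The counts are invented.
from scipy.stats import chi2_contingency

#                uncertain  certain
table = [[ 40,      60 ],   # answers inside the target context
         [ 55,     245 ]]   # answers outside it

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
# A small p-value indicates that answer uncertainty depends on the context.
```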